Spatially varying spectral modulation can be implemented with a liquid crystal spatial light modulator (SLM), since it provides an array of liquid crystal cells, each of which can be programmed to act as a spectral filter. However, such an optical setup suffers from strong optical aberrations due to unintended phase modulation, precluding spectral modulation at high spatial resolutions. In this work, we propose a novel computational approach for the practical use of phase SLMs as spatially varying spectral filters. We provide a careful and systematic analysis of the aberrations arising from phase SLMs used for spatially varying spectral modulation. The analysis naturally leads us to a set of "good patterns" that minimize the optical aberrations. We then train a deep network that overcomes any residual aberrations, thereby achieving ideal spectral modulation at high spatial resolution. We demonstrate a number of unique operating points with our prototype, including dynamic spectral filtering, material classification, and single- and multi-image hyperspectral imaging.
How to effectively construct and use dialogue data, and how to deploy models across different domains, are two key problems in building task-oriented dialogue systems. In this paper, we propose a novel manual-guided dialogue scheme to alleviate these problems, in which the agent learns the task from both the dialogue and a manual. The manual is an unstructured text document that guides the agent in interacting with the user and the database during conversations. Our proposed scheme reduces the dependence of dialogue models on fine-grained domain ontologies and makes them more flexible in adapting to various domains. We then contribute a fully annotated multi-domain dataset, MagDial, to support our scheme. It introduces three dialogue modeling subtasks: instruction matching, argument filling, and response generation. Modeling these subtasks is consistent with the behavior pattern of human agents. Experiments show that the manual-guided dialogue scheme improves data efficiency and domain scalability in building dialogue systems. The dataset and benchmark will be made publicly available to promote future research.
Applying differentially private stochastic gradient descent (DPSGD) to training modern, large-scale neural networks such as Transformer-based models is a challenging task, since the magnitude of noise added at each iteration scales with the model dimension, hindering the learning capability significantly. We propose a unified framework, $\textsf{LSG}$, that fully exploits the low-rank and sparse structure of neural networks to reduce the dimension of gradient updates and hence alleviate the negative impacts of DPSGD. The gradient updates are first approximated with a pair of low-rank matrices. Then, a novel strategy is used to sparsify the gradients, resulting in low-dimensional, less noisy updates that still preserve the performance of the neural networks. Empirical evaluations on natural language processing and computer vision tasks show that our method outperforms other state-of-the-art baselines.
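A toy NumPy sketch of the low-rank plus sparse gradient compression idea described in this abstract; the names and constants (`rank`, `density`, `clip_norm`, `sigma`) are illustrative assumptions, not the paper's actual API or hyperparameters.

```python
import numpy as np

def low_rank_sparse_update(grad, rank=4, density=0.1, clip_norm=1.0, sigma=0.5,
                           rng=None):
    """Compress a gradient matrix, then clip and noise it in DP-SGD fashion."""
    rng = rng or np.random.default_rng(0)
    # 1) Low-rank approximation via truncated SVD: grad ~ U_r diag(s_r) V_r^T.
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # 2) Sparsify: keep only the k largest-magnitude entries.
    k = max(1, int(density * approx.size))
    thresh = np.partition(np.abs(approx).ravel(), -k)[-k]
    mask = np.abs(approx) >= thresh
    sparse = np.where(mask, approx, 0.0)
    # 3) Clip to bound sensitivity, then add Gaussian noise only on the retained
    #    coordinates -- the dimension reduction is what shrinks the total noise.
    clipped = sparse * min(1.0, clip_norm / (np.linalg.norm(sparse) + 1e-12))
    return clipped + mask * rng.normal(0.0, sigma * clip_norm, size=grad.shape)

g = np.random.default_rng(1).normal(size=(64, 32))
update = low_rank_sparse_update(g)
```

In an actual training loop this would replace the per-example gradient before the optimizer step; here it only illustrates how the low-rank and sparsity steps compose with the usual clip-and-noise recipe.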
Data augmentation via voice conversion (VC) has been successfully applied to low-resource expressive text-to-speech (TTS) when only neutral data for the target speaker are available. Although the quality of VC is crucial for this approach, learning a stable VC model is challenging because the amount of data is limited in low-resource scenarios and highly expressive speech exhibits large acoustic variation. To address this issue, we propose a novel data augmentation method that combines pitch-shifting and VC techniques. Because pitch-shift data augmentation covers a variety of pitch dynamics, it greatly stabilizes the training of both the VC and TTS models, even when only 1,000 utterances of the target speaker's neutral data are available. Subjective test results show that a FastSpeech 2-based emotional TTS system with the proposed method improves naturalness and emotional similarity compared with conventional methods.
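A crude, NumPy-only sketch of the pitch-shift augmentation idea above, under a strong simplification: plain resampling, used here for brevity, changes duration along with pitch, whereas practical TTS pipelines keep duration fixed (e.g. with a phase vocoder or the WORLD vocoder). All names and shift values are illustrative.

```python
import numpy as np

def pitch_shift_resample(wav, semitones):
    """Shift pitch by resampling, i.e. reading the signal at a scaled rate."""
    factor = 2.0 ** (semitones / 12.0)   # frequency scaling per semitone
    n_out = int(len(wav) / factor)
    return np.interp(np.arange(n_out) * factor, np.arange(len(wav)), wav)

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t)     # one second of a 220 Hz test tone
# Augment with a small set of shifts to cover a wider range of pitch dynamics.
augmented = [pitch_shift_resample(tone, s) for s in (-4, -2, 2, 4)]
```

Each augmented copy would then be fed, together with the original, into VC/TTS training; the side effect of this toy version is visible in the output lengths (down-shifts stretch the signal, up-shifts shorten it).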
We propose Parallel WaveGAN, a distillation-free, fast, and small-footprint waveform generation method using a generative adversarial network. In the proposed method, a non-autoregressive WaveNet is trained by jointly optimizing multi-resolution spectrogram and adversarial loss functions, which can effectively capture the time-frequency distribution of the realistic speech waveform. As our method does not require the density distillation used in the conventional teacher-student framework, the entire model can be easily trained. Furthermore, our model is able to generate high-fidelity speech even with its compact architecture. In particular, the proposed Parallel WaveGAN has only 1.44M parameters and can generate a 24 kHz speech waveform 28.68 times faster than real time in a single-GPU environment. Perceptual listening test results verify that our proposed method achieves a 4.16 mean opinion score within a Transformer-based text-to-speech framework, which is comparable to the best distillation-based Parallel WaveNet system.
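A minimal NumPy sketch of the multi-resolution spectrogram loss mentioned above. The spectral-convergence plus log-magnitude formulation is the common recipe for this loss; the specific FFT sizes and hop lengths here are illustrative assumptions, not necessarily the paper's settings.

```python
import numpy as np

def stft_mag(x, fft_size, hop):
    """Magnitude STFT via Hann-windowed frames (NumPy only)."""
    window = np.hanning(fft_size)
    n_frames = 1 + (len(x) - fft_size) // hop
    frames = np.stack([x[i * hop:i * hop + fft_size] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1)) + 1e-7  # floor avoids log(0)

def multi_resolution_stft_loss(pred, target,
                               resolutions=((1024, 256), (2048, 512), (512, 128))):
    """Average spectral-convergence + log-magnitude loss over several STFT settings."""
    total = 0.0
    for fft_size, hop in resolutions:
        P, T = stft_mag(pred, fft_size, hop), stft_mag(target, fft_size, hop)
        sc = np.linalg.norm(T - P) / np.linalg.norm(T)   # spectral convergence
        mag = np.mean(np.abs(np.log(T) - np.log(P)))     # log-magnitude L1
        total += sc + mag
    return total / len(resolutions)

rng = np.random.default_rng(0)
x = rng.normal(size=8192)
loss_same = multi_resolution_stft_loss(x, x)
```

Using several resolutions at once prevents the generator from overfitting to a single time-frequency trade-off; in training this term is combined with the adversarial loss, which is omitted here.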